
    Differentiating glaucoma from chiasmal compression using optical coherence tomography: the macular naso-temporal ratio

    BACKGROUND/AIMS: The analysis of visual field loss patterns is clinically useful to guide differential diagnosis of visual pathway pathology. This study investigates whether a novel index of macular atrophy patterns can discriminate between chiasmal compression and glaucoma. METHODS: A retrospective series of patients with preoperative chiasmal compression, primary open-angle glaucoma (POAG) and healthy controls. Macular optical coherence tomography (OCT) images were analysed for the macular ganglion cell and inner plexiform layer (mGCIPL) thickness. The nasal hemi-macula was compared with the temporal hemi-macula to derive the macular naso-temporal ratio (mNTR). Differences between groups and diagnostic accuracy were explored with multivariable linear regression and the area under the receiver operating characteristic curve (AUC). RESULTS: We included 111 individuals (31 with chiasmal compression, 30 with POAG and 50 healthy controls). Compared with healthy controls, the mNTR was significantly greater in POAG cases (β=0.07, 95% CI 0.03 to 0.11, p=0.001) and lower in chiasmal compression cases (β=-0.12, 95% CI -0.16 to -0.09, p<0.001), even though overall mGCIPL thickness did not discriminate between these pathologies (p=0.36). The mNTR distinguished POAG from chiasmal compression with an AUC of 95.3% (95% CI 90% to 100%). The AUCs when comparing healthy controls to POAG and chiasmal compression were 79.0% (95% CI 68% to 90%) and 89.0% (95% CI 80% to 98%), respectively. CONCLUSIONS: The mNTR can distinguish between chiasmal compression and POAG with high discrimination. This ratio may provide utility over and above previously reported sectoral thinning metrics. Incorporation of the mNTR into the output of OCT instruments may aid earlier diagnosis of chiasmal compression.
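    The mNTR itself is a simple summary statistic. Below is a minimal sketch of how it could be computed from an mGCIPL thickness map, assuming the map arrives as a 2D array centred on the fovea and that the caller specifies which image half is nasal; the array layout and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def macular_ntr(mgcipl: np.ndarray, nasal_side: str) -> float:
    """Mean nasal hemi-macular mGCIPL thickness divided by the temporal mean.

    mgcipl: 2D thickness map (microns) centred on the fovea.
    nasal_side: 'left' or 'right', i.e. which image half is nasal; this
    depends on eye laterality and scan orientation and must be supplied
    by the caller (an assumption of this sketch).
    """
    mid = mgcipl.shape[1] // 2
    left, right = mgcipl[:, :mid].mean(), mgcipl[:, mid:].mean()
    nasal, temporal = (left, right) if nasal_side == "left" else (right, left)
    return nasal / temporal
```

    Group discrimination could then be assessed by feeding per-eye mNTR values to, for example, sklearn.metrics.roc_auc_score.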

    Evaluating an automated machine learning model that predicts visual acuity outcomes in patients with neovascular age-related macular degeneration

    PURPOSE: Neovascular age-related macular degeneration (nAMD) is a major global cause of blindness. Whilst anti-vascular endothelial growth factor (anti-VEGF) treatment is effective, response varies considerably between individuals. Thus, patients face substantial uncertainty regarding their future ability to perform daily tasks. In this study, we evaluate the performance of an automated machine learning (AutoML) model which predicts visual acuity (VA) outcomes in patients receiving treatment for nAMD, in comparison to a manually coded model built using the same dataset. Furthermore, we evaluate model performance across ethnic groups and analyse how the models reach their predictions. METHODS: Binary classification models were trained to predict whether patients' VA would be 'Above' or 'Below' a score of 70 one year after initiating treatment, measured using the Early Treatment Diabetic Retinopathy Study (ETDRS) chart. The AutoML model was built using the Google Cloud Platform, whilst the bespoke model was trained using an XGBoost framework. Models were compared and analysed using the What-if Tool (WIT), a novel model-agnostic interpretability tool. RESULTS: Our study included 1631 eyes from patients attending Moorfields Eye Hospital. The AutoML model (area under the curve [AUC], 0.849) achieved a highly similar performance to the XGBoost model (AUC, 0.847). Using the WIT, we found that the models over-predicted negative outcomes in Asian patients and performed worse in those with an ethnic category of Other. Baseline VA, age and ethnicity were the most important determinants of model predictions. Partial dependence plot analysis revealed a sigmoidal relationship between baseline VA and the probability of an outcome of 'Above'. CONCLUSION: We have described and validated an AutoML-WIT pipeline which enables clinicians with minimal coding skills to match the performance of a state-of-the-art algorithm and obtain explainable predictions.
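    For context, the bespoke comparator corresponds to a standard gradient-boosted binary classifier. A minimal sketch, assuming a tabular extract with hypothetical column names; the study's actual features and hyperparameters are not reproduced here.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Hypothetical extract: one row per eye, label = VA above 70 ETDRS
# letters one year after starting anti-VEGF treatment.
df = pd.read_csv("namd_cohort.csv")
features = ["baseline_va", "age", "ethnicity_code"]  # assumed predictors

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["va_above_70_at_1y"], test_size=0.2, random_state=0
)
model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss"
)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```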

    AutoMorph: Automated Retinal Vascular Morphology Quantification Via a Deep Learning Pipeline

    Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases. Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify false gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets. Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. Artery/vein scores are 0.66 on IOSTAR-AV, and disc segmentation achieves 0.94 on IDRiD. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement. Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs. Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics.
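    The ensemble-plus-confidence step described in the Methods can be illustrated in a few lines. A minimal sketch, assuming a three-grade quality scheme and an illustrative confidence cut-off; the class order and threshold are assumptions, not AutoMorph's published values.

```python
import numpy as np

GRADABLE, USABLE, UNGRADABLE = 0, 1, 2  # assumed three-grade label scheme
CONF_THRESHOLD = 0.75                   # assumed cut-off, not AutoMorph's value

def grade_with_confidence(softmax_outputs: np.ndarray) -> np.ndarray:
    """softmax_outputs: (n_models, n_images, n_classes) ensemble probabilities."""
    mean_probs = softmax_outputs.mean(axis=0)  # average over the ensemble
    labels = mean_probs.argmax(axis=1)
    confidence = mean_probs.max(axis=1)
    # Demote low-confidence 'gradable' calls, cutting false gradable cases.
    labels[(labels == GRADABLE) & (confidence < CONF_THRESHOLD)] = UNGRADABLE
    return labels
```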

    Visual dysfunction is a better predictor than retinal thickness for dementia in Parkinson's disease

    BACKGROUND: Dementia is a common and devastating symptom of Parkinson's disease (PD). Visual function and retinal structure are both emerging as potentially predictive for dementia in Parkinson's but lack longitudinal evidence. METHODS: We prospectively examined higher order vision (skew tolerance and biological motion) and retinal thickness (spectral domain optical coherence tomography) in 100 people with PD and 29 controls, with longitudinal cognitive assessments at baseline, 18 months and 36 months. We examined whether visual and retinal baseline measures predicted longitudinal cognitive scores using linear mixed effects models and whether they predicted onset of dementia, death and frailty using time-to-outcome methods. RESULTS: Patients with PD with poorer baseline visual performance scored lower on a composite cognitive score (β=0.178, SE=0.05, p=0.0005) and showed greater decreases in cognition over time (β=0.024, SE=0.001, p=0.013). Poorer visual performance also predicted greater probability of dementia (χ²(1)=5.2, p=0.022) and poor outcomes (χ²(1)=10.0, p=0.002). Baseline retinal thickness of the ganglion cell-inner plexiform layer did not predict cognitive scores or change in cognition with time in PD (β=-0.013, SE=0.080, p=0.87; β=0.024, SE=0.001, p=0.12). CONCLUSIONS: In our deeply phenotyped longitudinal cohort, visual dysfunction predicted dementia and poor outcomes in PD. Conversely, retinal thickness had less power to predict dementia. This supports mechanistic models for Parkinson's dementia progression with onset in cortical structures and shows potential for visual tests to enable stratification for clinical trials.
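    The longitudinal analysis maps onto a standard linear mixed-effects formulation. A minimal sketch, assuming a long-format table with hypothetical column names (one row per participant per visit); this illustrates the technique, not the study's exact model specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per assessment.
df = pd.read_csv("pd_visual_cohort.csv")

# Composite cognition against baseline visual performance and time,
# with a random intercept per participant.
model = smf.mixedlm(
    "cognitive_score ~ baseline_vision * years_from_baseline + age",
    data=df,
    groups=df["participant_id"],
).fit()
print(model.summary())
```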

    Determinants of non-attendance at face-to-face and telemedicine ophthalmic consultations

    BACKGROUND/AIMS: Evaluation of telemedicine care models has highlighted its potential for exacerbating healthcare inequalities. This study seeks to identify and characterise factors associated with non-attendance across face-to-face and telemedicine outpatient appointments. METHODS: A retrospective cohort study at a tertiary-level ophthalmic institution in the UK, between 1 January 2019 and 31 October 2021. Logistic regression modelled non-attendance against sociodemographic, clinical and operational exposure variables for all new patient registrations across five delivery modes: asynchronous, synchronous telephone, synchronous audiovisual, face to face prior to the pandemic and face to face during the pandemic. RESULTS: A total of 85 924 patients (median age 55 years, 54.4% female) were newly registered. Non-attendance differed significantly by delivery mode (9.0% face to face pre-pandemic, 10.5% face to face during the pandemic, 11.7% asynchronous and 7.8% synchronous during the pandemic). Male sex, greater levels of deprivation, a previously cancelled appointment and not self-reporting ethnicity were strongly associated with non-attendance across all delivery modes. Individuals identifying as black ethnicity had worse attendance in synchronous audiovisual clinics (adjusted OR 4.24, 95% CI 1.59 to 11.28) but not in asynchronous clinics. Those not self-reporting their ethnicity were from more deprived backgrounds, had worse broadband access and had significantly higher non-attendance across all modes (all p<0.001). CONCLUSION: Persistent non-attendance among underserved populations attending telemedicine appointments highlights the challenge digital transformation faces for reducing healthcare inequalities. Implementation of new programmes should be accompanied by investigation into the differential health outcomes of vulnerable populations.
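    A minimal sketch of this modelling approach, assuming a flat appointments table with hypothetical column names; the study's exact covariate coding is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract: one row per new patient registration.
appointments = pd.read_csv("appointments.csv")

# Fit one logistic regression per delivery mode and report odds ratios.
for mode, subset in appointments.groupby("delivery_mode"):
    fit = smf.logit(
        "non_attendance ~ sex + imd_quintile + prior_cancellation"
        " + ethnicity_reported + age",
        data=subset,
    ).fit(disp=False)
    print(mode)
    print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```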

    Phenotyping of ABCA4 Retinopathy by Machine Learning Analysis of Full-Field Electroretinography

    PURPOSE: Biallelic pathogenic variants in ABCA4 are the commonest cause of monogenic retinal disease. The full-field electroretinogram (ERG) quantifies severity of retinal dysfunction. We explored application of machine learning in ERG interpretation and in genotype–phenotype correlations. METHODS: International standard ERGs in 597 cases of ABCA4 retinopathy were classified into three functional phenotypes by human experts: macular dysfunction alone (group 1), or with additional generalized cone dysfunction (group 2), or both cone and rod dysfunction (group 3). Algorithms were developed for automatic selection and measurement of ERG components and for classification of ERG phenotype. Elastic-net regression was used to quantify severity of specific ABCA4 variants based on effect on retinal function. RESULTS: Of the cohort, 57.6%, 7.4%, and 35.0% fell into groups 1, 2, and 3, respectively. Compared with human experts, automated classification showed overall accuracy of 91.8% (SE, 0.169), and 96.7%, 39.3%, and 93.8% for groups 1, 2, and 3. When groups 2 and 3 were combined, the average holdout group accuracy was 93.6% (SE, 0.142). A regression model yielded phenotypic severity scores for the 47 commonest ABCA4 variants. CONCLUSIONS: This study quantifies prevalence of phenotypic groups based on retinal function in a uniquely large single-center cohort of patients with electrophysiologically characterized ABCA4 retinopathy and shows applicability of machine learning. Novel regression-based analyses of ABCA4 variant severity could identify individuals predisposed to severe disease. Translational Relevance: Machine learning can yield meaningful classifications of ERG data, and data-driven scoring of genetic variants can identify patients likely to benefit most from future therapies.
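    The variant-scoring idea lends itself to a compact illustration. A minimal sketch, assuming each patient's two ABCA4 alleles are encoded as per-variant copy counts and regressed against a numeric functional severity target; the encoding and file names are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Hypothetical inputs:
#   X[i, j] = copies (0, 1 or 2) of ABCA4 variant j carried by patient i
#   y[i]    = numeric severity derived from the patient's ERG phenotype
X = np.load("allele_counts.npy")
y = np.load("erg_severity.npy")

model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)

# Each fitted coefficient acts as a data-driven severity score for the
# corresponding variant's effect on retinal function.
severity_scores = model.coef_
```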

    AlzEye: longitudinal record-level linkage of ophthalmic imaging and hospital admissions of 353 157 patients in London, UK

    PURPOSE: Retinal signatures of systemic disease ('oculomics') are increasingly being revealed through a combination of high-resolution ophthalmic imaging and sophisticated modelling strategies. Progress is currently limited not mainly by technical issues, but by the lack of large labelled datasets, a sine qua non for deep learning. Such data are derived from prospective epidemiological studies, in which retinal imaging is typically unimodal, cross-sectional, of modest number and relates to cohorts that are not enriched with subpopulations of interest, such as those with systemic disease. We thus linked longitudinal multimodal retinal imaging from routinely collected National Health Service (NHS) data with systemic disease data from hospital admissions using a privacy-by-design third-party linkage approach. PARTICIPANTS: Between 1 January 2008 and 1 April 2018, 353 157 participants aged 40 years or older attended Moorfields Eye Hospital NHS Foundation Trust, a tertiary ophthalmic institution incorporating a principal central site, four district hubs and five satellite clinics in and around London, UK, serving a catchment population of approximately six million people. FINDINGS TO DATE: Among the 353 157 individuals, 186 651 had a total of 1 337 711 Hospital Episode Statistics admitted patient care episodes. Systemic diagnoses recorded at these episodes include 12 022 patients with myocardial infarction, 11 735 with all-cause stroke and 13 363 with all-cause dementia. A total of 6 261 931 retinal images of seven different modalities and across three manufacturers were acquired from 154 830 patients. The majority of retinal images were retinal photographs (n=1 874 175) followed by optical coherence tomography (n=1 567 358). FUTURE PLANS: AlzEye combines the world's largest single-institution retinal imaging database with nationally collected systemic data to create an exceptionally large-scale, enriched cohort that reflects the diversity of the population served. First analyses will address cardiovascular diseases and dementia, with a view to identifying hidden retinal signatures that may lead to earlier detection and risk management of these life-threatening conditions.
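    One common pattern for privacy-by-design third-party linkage is keyed hashing of identifiers, sketched below in generic form. This is an illustration of the general technique only, not the AlzEye protocol, and all names and keys are hypothetical.

```python
import hashlib
import hmac

# Hypothetical linkage key held only by the trusted third party.
LINKAGE_KEY = b"third-party-secret"

def pseudonym(nhs_number: str, date_of_birth: str) -> str:
    """Keyed hash so sites can share tokens instead of raw identifiers."""
    message = f"{nhs_number}|{date_of_birth}".encode()
    return hmac.new(LINKAGE_KEY, message, hashlib.sha256).hexdigest()

# Each data controller replaces identifiers with pseudonym(...) before
# export; records from the two sources are then joined on the token, so
# no single party sees both the identifiers and the combined clinical data.
```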

    A foundation model for generalizable disease detection from retinal images

    Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders1. However, the development of AI models requires substantial annotation and models are usually task-specific with limited generalizability to different clinical applications2. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as incident prediction of complex systemic disorders such as heart failure and myocardial infarction with fewer labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts to enable broad clinical AI applications from retinal imaging.
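    The label-efficient adaptation step described above can be sketched generically: load a self-supervised pretrained encoder, attach a small task head, and fine-tune on labelled examples. The checkpoint path, embedding size and training loop below are assumptions for illustration; the authors' actual code is in the public RETFound repository.

```python
import torch
import torch.nn as nn

# Hypothetical checkpoint containing the pretrained encoder module.
encoder = torch.load("retfound_encoder.pt")
EMBED_DIM, N_CLASSES = 1024, 2  # assumed sizes for this sketch

# Pretrained representation + new task-specific classification head.
model = nn.Sequential(encoder, nn.Linear(EMBED_DIM, N_CLASSES))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised fine-tuning step on a labelled batch."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```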


    SynthEye: Investigating the Impact of Synthetic Data on Artificial Intelligence-assisted Gene Diagnosis of Inherited Retinal Disease

    PURPOSE: Rare disease diagnosis is challenging in medical image-based artificial intelligence due to a natural class imbalance in datasets, leading to biased prediction models. Inherited retinal diseases (IRDs) are a research domain that particularly faces this issue. This study investigates the applicability of synthetic data in improving artificial intelligence-enabled diagnosis of IRDs using generative adversarial networks (GANs). DESIGN: Diagnostic study of gene-labeled fundus autofluorescence (FAF) IRD images using deep learning. PARTICIPANTS: Moorfields Eye Hospital (MEH) dataset of 15 692 FAF images obtained from 1800 patients with confirmed genetic diagnosis of 1 of 36 IRD genes. METHODS: A StyleGAN2 model is trained on the IRD dataset to generate 512 × 512 resolution images. Convolutional neural networks are trained for classification using different synthetically augmented datasets, including real IRD images plus 1800 and 3600 synthetic images, and a fully rebalanced dataset. We also perform an experiment with only synthetic data. All models are compared against a baseline convolutional neural network trained only on real data. MAIN OUTCOME MEASURES: We evaluated synthetic data quality using a Visual Turing Test conducted with 4 ophthalmologists from MEH. Synthetic and real images were compared using feature space visualization, similarity analysis to detect memorized images, and Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score for no-reference-based quality evaluation. Convolutional neural network diagnostic performance was determined on a held-out test set using the area under the receiver operating characteristic curve (AUROC) and Cohen's Kappa (κ). RESULTS: An average true recognition rate of 63% and fake recognition rate of 47% was obtained from the Visual Turing Test. Thus, a considerable proportion of the synthetic images were classified as real by clinical experts. Similarity analysis showed that the synthetic images were not copies of the real images, indicating that the GAN was able to generalize rather than memorize its training data. However, BRISQUE score analysis indicated that synthetic images were of significantly lower quality overall than real images (P < 0.05). Comparing the rebalanced model (RB) with the baseline (R), no significant change in the average AUROC and κ was found (R-AUROC = 0.86 [0.85-0.88], RB-AUROC = 0.88 [0.86-0.89], R-κ = 0.51 [0.49-0.53], and RB-κ = 0.52 [0.50-0.54]). The synthetic data trained model (S) achieved similar performance as the baseline (S-AUROC = 0.86 [0.85-0.87], S-κ = 0.48 [0.46-0.50]). CONCLUSIONS: Synthetic generation of realistic IRD FAF images is feasible. Synthetic data augmentation does not deliver improvements in classification performance. However, synthetic data alone deliver a similar performance as real data, and hence may be useful as a proxy for real data. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
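    The rebalancing experiment can be sketched generically: top up each under-represented gene class with synthetic images until all classes match the largest one. The data structures below are assumptions for illustration, not the study's code.

```python
import random
from collections import Counter

def rebalance(real, synth_pool):
    """real: list of (image_path, gene) pairs from the labelled dataset.
    synth_pool: dict mapping gene -> list of StyleGAN2-generated paths."""
    counts = Counter(gene for _, gene in real)
    target = max(counts.values())
    augmented = list(real)
    for gene, n in counts.items():
        pool = synth_pool.get(gene, [])
        extras = random.sample(pool, min(target - n, len(pool)))
        augmented += [(path, gene) for path in extras]
    return augmented
```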